With the rapid growth in the number of videos, there is a strong demand for techniques that help people quickly navigate to the video segments they are interested in. However, current video understanding focuses mainly on summarizing video content, with little effort devoted to exploring a video's structure. Inspired by text outline generation, we introduce a novel video understanding task, Video Outline Generation (VOG). The task is defined as two subtasks: (1) first segmenting a video according to its content structure and then (2) generating a heading for each segment. To learn and evaluate VOG, we annotate a 10K+ dataset called DuVOG. Specifically, we use OCR tools to recognize the subtitles of videos. Annotators are then asked to divide the subtitles into chapters and to mark the heading of each chapter. In videos, visually highlighted text tends to be a heading, since it is more likely to attract attention. Therefore, we propose a Visual Subtitle feature Enhanced video outline generation model (VSENet), which takes the textual subtitles together with their visual font sizes and positions as input. We treat the VOG task as a sequence tagging problem that extracts the positions of headings across the subtitles and then rewrites them to form the final outline. In addition, based on the similarity between video outlines and text outlines, we pretrain our model on a large corpus of articles with chapter headings. Experiments on DuVOG show that our model largely outperforms other baseline methods, achieving an F1 score of 77.1 at the video segmentation level and a ROUGE-L_F0.5 of 85.0 at the heading generation level.
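Below is a minimal sketch of the sequence-tagging view of VOG described above, assuming each subtitle line is one tagging unit whose text embedding is augmented with visual cues (font size, position). The module name, label set, and dimensions are illustrative assumptions, not VSENet's actual architecture.

```python
# Illustrative sketch only: per-subtitle-line tagging with visual features.
import torch
import torch.nn as nn

class SubtitleTagger(nn.Module):
    def __init__(self, text_dim=768, visual_dim=2, hidden=256, num_tags=3):
        super().__init__()
        # num_tags: e.g. ordinary line / segment boundary / heading line (assumed label set)
        self.encoder = nn.LSTM(text_dim + visual_dim, hidden,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_tags)

    def forward(self, text_emb, visual_feats):
        # text_emb: (batch, n_lines, text_dim)  pre-computed subtitle embeddings
        # visual_feats: (batch, n_lines, 2)     normalized font size and vertical position
        x = torch.cat([text_emb, visual_feats], dim=-1)
        h, _ = self.encoder(x)
        return self.classifier(h)   # per-line tag logits

# toy usage: one video with 5 subtitle lines
tagger = SubtitleTagger()
logits = tagger(torch.randn(1, 5, 768), torch.rand(1, 5, 2))
print(logits.shape)  # torch.Size([1, 5, 3])
```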
Outstanding image-text retrieval models rely on high-quality labeled data. Although the builders of existing image-text retrieval datasets strive to ensure that each caption matches its linked image, they cannot prevent a caption from also fitting other images. We observe that such many-to-many matching phenomena are quite common in widely used retrieval datasets, where one caption can describe up to 178 images. These mismatched data not only confuse the model during training but also weaken evaluation accuracy. Inspired by visual and textual entailment tasks, we propose a multi-modal entailment classifier to determine whether a sentence is entailed by an image together with its linked caption. We then revise image-text retrieval datasets by adding these entailed captions as additional labels of the images, and develop a universal variable-rate training strategy to teach retrieval models to distinguish the entailed captions from other negative samples. In experiments, we manually annotate an entailment-corrected image-text retrieval dataset for evaluation. The results show that the proposed entailment classifier achieves about 78% accuracy and consistently improves the performance of image-text retrieval baselines.
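The following toy sketch shows one way entailed captions could be handled during contrastive retrieval training: they are down-weighted rather than pushed away as hard negatives. The weighting factor and loss form are assumptions for illustration, not the paper's exact variable-rate strategy.

```python
# Toy sketch: soften the negative contribution of captions flagged as entailed.
import torch

def entailment_aware_loss(sim, entailed_mask, temperature=0.07, neg_weight=0.1):
    # sim: (n_images, n_captions) cosine similarities; row i's true caption is column i
    # entailed_mask: bool, True where a caption is entailed by (but not originally
    #                labeled for) the image
    logits = sim / temperature
    weights = torch.where(entailed_mask,
                          torch.full_like(logits, neg_weight),
                          torch.ones_like(logits))
    idx = torch.arange(sim.size(0))
    # weighted log-softmax over captions for each image
    log_probs = logits - torch.log((weights * logits.exp()).sum(-1, keepdim=True))
    return -log_probs[idx, idx].mean()

sim = torch.randn(4, 4)
mask = torch.zeros(4, 4, dtype=torch.bool)
mask[0, 2] = True   # caption 2 also fits image 0
print(entailment_aware_loss(sim, mask))
```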
The lack of labeled data is one of the significant bottlenecks for Chinese Spelling Check (CSC). Existing research expands the supervised corpus by automatically generating data from unlabeled text. However, there is a big gap between real input scenarios and automatically generated corpora. Thus, we develop a competitive general speller, ECSpell, which adopts an Error Consistent masking strategy to create data for pretraining. This error-consistent masking strategy specifies the error types of automatically generated sentences so that they are consistent with real scenes. The experimental results indicate that our model outperforms previous state-of-the-art models on the general benchmark. Moreover, spellers often work within a particular domain in real life. Due to the many uncommon domain terms, experiments on the domain-specific datasets we built show that general models perform terribly. Inspired by the common practice of input methods, we propose adding an alterable user dictionary to handle the zero-shot domain adaptation problem. Specifically, we attach a User Dictionary guided inference module (UD) to a general token-classification-based speller. Our experiments demonstrate that ECSpell$^{UD}$, namely ECSpell combined with UD, surpasses all the other baselines by a large margin, even approaching its performance on the general benchmark.
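Here is an illustrative sketch of error-consistent corruption for CSC pretraining data: characters are replaced with confusable ones at a rate and error-type mix (phonetic vs. visual confusions) chosen to match real-scene statistics. The tiny confusion sets and ratios below are placeholders, not ECSpell's actual resources.

```python
# Illustrative sketch: error-consistent corruption of clean sentences.
import random

PHONETIC_CONFUSION = {"的": ["得", "地"], "在": ["再"]}   # placeholder confusion sets
VISUAL_CONFUSION = {"未": ["末"], "己": ["已"]}

def corrupt(sentence, corrupt_prob=0.15, phonetic_ratio=0.8):
    # phonetic_ratio: share of phonetic errors, ideally estimated from real errors
    out = []
    for ch in sentence:
        table = PHONETIC_CONFUSION if random.random() < phonetic_ratio else VISUAL_CONFUSION
        if ch in table and random.random() < corrupt_prob:
            out.append(random.choice(table[ch]))   # inject a confusable character
        else:
            out.append(ch)
    return "".join(out)

random.seed(0)
src = "我的书还在家里"
print(corrupt(src), "<-", src)
```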
Semantic role labeling (SRL) is a fundamental yet challenging task in the NLP community. Recent works on SRL mainly fall into two lines: 1) BIO-based and 2) span-based. Despite their popularity, both have the intrinsic drawback of not considering internal argument structures, which may hinder the model's expressiveness. The key challenge is that arguments are flat structures, with no determined subtree realizations inside them. To remedy this, in this paper we propose to regard flat argument spans as latent subtrees, thereby reducing SRL to a tree parsing task. In particular, we equip our formulation with a novel span-constrained TreeCRF to make the tree structures span-aware, and further extend it to the second-order case. We conduct extensive experiments on the CoNLL05 and CoNLL12 benchmarks. The results show that our methods perform better than all previous syntax-agnostic works, achieving new state-of-the-art results under both the end-to-end and w/ gold predicates settings.
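A schematic of the latent-subtree objective this abstract hints at, written in our own shorthand rather than the paper's notation: argument spans are treated as latent structure, so training maximizes the marginal probability of all trees compatible with the gold spans.

```latex
% Schematic latent-tree CRF objective (notation assumed, not taken verbatim
% from the paper). T(x, s) is the set of parses whose subtrees are compatible
% with the flat argument span s.
\begin{aligned}
  p(t \mid x) &= \frac{\exp\big(\mathrm{score}(x, t)\big)}{Z(x)}, \qquad
  Z(x) = \sum_{t'} \exp\big(\mathrm{score}(x, t')\big), \\
  \mathcal{L}(\theta) &= -\log \sum_{t \in T(x, s)} p(t \mid x)
\end{aligned}
```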
Despite significant progress in object categorization in recent years, a number of important challenges remain; mainly, the ability to learn from limited labeled data and to recognize object classes within large, potentially open, sets of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and to address the problems of supervised, zero-shot, generalized zero-shot, and open set recognition using a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes, in the embedding space, than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open set recognition, with up to a 310K class vocabulary, on the Animals with Attributes and ImageNet datasets.
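A toy sketch of the kind of distance constraint described above: a labeled sample's embedding should be closer to its own semantic prototype than to any other vocabulary atom, by at least a margin. The hinge form, margin value, and use of the hardest negative are assumptions for illustration, not the paper's exact weighted formulation.

```python
# Toy margin constraint over a (possibly very large) vocabulary of prototypes.
import torch

def vocab_margin_loss(embeddings, labels, prototypes, margin=0.5):
    # embeddings: (n, d) projected samples; prototypes: (V, d) vocabulary atoms
    dists = torch.cdist(embeddings, prototypes)          # (n, V) pairwise distances
    pos = dists.gather(1, labels.unsqueeze(1))           # distance to own prototype
    neg = dists.clone()
    neg.scatter_(1, labels.unsqueeze(1), float("inf"))   # mask out the own class
    hardest_neg = neg.min(dim=1).values.unsqueeze(1)     # closest wrong prototype
    return torch.clamp(pos + margin - hardest_neg, min=0).mean()

emb = torch.randn(8, 64)
protos = torch.randn(1000, 64)            # vocabulary atoms, supervised + unsupervised
labels = torch.randint(0, 1000, (8,))
print(vocab_margin_loss(emb, labels, protos))
```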
A noisy training set usually leads to the degradation of the generalization and robustness of neural networks. In this paper, we propose a novel theoretically guaranteed clean sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method to model the linear relation between network features and one-hot labels. In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover clean data under some conditions. Under general scenarios, these conditions may no longer be satisfied, and some noisy data are falsely selected as clean data. To solve this problem, we propose a data-adaptive method for Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which provably controls the False-Selection-Rate (FSR) in the selected clean data. To improve efficiency, we further present a split algorithm that divides the whole training set into small pieces that can be solved in parallel, making the framework scalable to large datasets. While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.
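A schematic of the mean-shift penalized regression behind SPR as described above, in our own shorthand rather than the paper's exact notation: each sample gets a mean-shift term, a sparsity penalty drives most of them to zero, and the samples whose terms are solved to zero are kept as clean.

```latex
% Schematic form (symbols are assumed shorthand): X are network features,
% Y one-hot labels, gamma_i a per-sample mean-shift term, P a sparsity penalty.
\begin{aligned}
  \min_{\beta, \gamma}\;
    & \tfrac{1}{2}\,\lVert Y - X\beta - \gamma \rVert_2^2
      + \lambda \sum_{i=1}^{n} P(\gamma_i), \\
  \text{clean set} &= \{\, i : \hat{\gamma}_i = 0 \,\}
\end{aligned}
```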
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose a Chinese cOrpus foR Gender bIas Probing and Mitigation, CORGI-PM, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility, maintaining low confidence in such black-box systems, a problem exacerbated by low generalization on out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the box transparent in a quantifiable way through uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and on OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We show the effectiveness of EvidenceCap on three segmentation datasets and apply it in the clinic. Our work sheds light on clinically safe applications and explainable AI, and can contribute towards trustworthiness in the medical domain.
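A minimal sketch of an evidential (subjective-logic) output head of the kind this abstract refers to: class evidence is mapped to Dirichlet concentration parameters, giving per-pixel belief masses plus an explicit uncertainty mass. The head and losses used in EvidenceCap itself may differ; this only illustrates the standard subjective-logic construction.

```python
# Minimal evidential head sketch: evidence -> Dirichlet -> belief + uncertainty.
import torch
import torch.nn.functional as F

def evidential_head(logits):
    # logits: (batch, num_classes, H, W) raw network outputs
    evidence = F.softplus(logits)            # non-negative evidence per class
    alpha = evidence + 1.0                   # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)
    belief = evidence / strength             # per-class belief mass b_k = e_k / S
    uncertainty = logits.size(1) / strength  # leftover uncertainty mass u = K / S
    return belief, uncertainty

logits = torch.randn(2, 4, 8, 8)
belief, unc = evidential_head(logits)
print(belief.shape, unc.shape)  # torch.Size([2, 4, 8, 8]) torch.Size([2, 1, 8, 8])
```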
Most Graph Neural Networks follow the message-passing paradigm, assuming the observed structure depicts the ground-truth node relationships. However, this fundamental assumption cannot always be satisfied, as real-world graphs are often incomplete, noisy, or redundant. How to reveal the inherent graph structure in a unified way remains under-explored. We propose PRI-GSL, a Graph Structure Learning framework guided by the Principle of Relevant Information, providing a simple and unified framework for identifying the self-organization and revealing the hidden structure. PRI-GSL learns a structure that contains the most relevant yet least redundant information, quantified by von Neumann entropy and the Quantum Jensen-Shannon divergence. PRI-GSL incorporates the evolution of quantum continuous walks with graph wavelets to encode node structural roles, showing how the nodes interplay and self-organize with the graph structure. Extensive experiments demonstrate the superior effectiveness and robustness of PRI-GSL.
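As a small standalone illustration of one quantity mentioned above, the sketch below computes the von Neumann entropy of a graph from its trace-normalized Laplacian. PRI-GSL embeds this kind of measure inside a learned objective; the code only shows the quantity itself.

```python
# Von Neumann entropy of a graph via the trace-normalized Laplacian.
import numpy as np

def von_neumann_entropy(adj):
    deg = np.diag(adj.sum(axis=1))
    laplacian = deg - adj
    rho = laplacian / np.trace(laplacian)     # density-matrix analogue (unit trace)
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]        # drop numerically-zero eigenvalues
    return float(-(eigvals * np.log(eigvals)).sum())

# toy 4-node path graph: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(von_neumann_entropy(A))
```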
State space models (SSMs) have demonstrated state-of-the-art sequence modeling performance in some modalities, but underperform attention in language modeling. Moreover, despite scaling nearly linearly in sequence length instead of quadratically, SSMs are still slower than Transformers due to poor hardware utilization. In this paper, we make progress on understanding the expressivity gap between SSMs and attention in language modeling, and on reducing the hardware barrier between SSMs and attention. First, we use synthetic language modeling tasks to understand the gap between SSMs and attention. We find that existing SSMs struggle with two capabilities: recalling earlier tokens in the sequence and comparing tokens across the sequence. To understand the impact on language modeling, we propose a new SSM layer, H3, that is explicitly designed for these abilities. H3 matches attention on the synthetic languages and comes within 0.4 PPL of Transformers on OpenWebText. Furthermore, a hybrid 125M-parameter H3-attention model that retains two attention layers surprisingly outperforms Transformers on OpenWebText by 1.0 PPL. Next, to improve the efficiency of training SSMs on modern hardware, we propose FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on sequences up to 8K, and introduces a novel state passing algorithm that exploits the recurrent properties of SSMs to scale to longer sequences. FlashConv yields 2$\times$ speedup on the long-range arena benchmark and allows hybrid language models to generate text 1.6$\times$ faster than Transformers. Using FlashConv, we scale hybrid H3-attention language models up to 1.3B parameters on the Pile and find promising initial results, achieving lower perplexity than Transformers and outperforming Transformers in zero- and few-shot learning on a majority of tasks in the SuperGLUE benchmark.
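A rough sketch of the FFT-based long convolution at the heart of SSM layers like H3: FlashConv fuses a block FFT on the GPU, whereas this plain-PyTorch version only illustrates the O(L log L) idea, not the paper's kernel or state-passing algorithm.

```python
# FFT-based long convolution: zero-pad to 2L so the circular FFT convolution
# reduces to an ordinary (causal) linear convolution, then keep the first L outputs.
import torch

def fft_long_conv(u, k):
    # u: (batch, L) input sequence, k: (L,) SSM convolution kernel
    L = u.size(-1)
    n = 2 * L
    u_f = torch.fft.rfft(u, n=n)
    k_f = torch.fft.rfft(k, n=n)
    return torch.fft.irfft(u_f * k_f, n=n)[..., :L]

u = torch.randn(2, 1024)
k = torch.randn(1024)
print(fft_long_conv(u, k).shape)  # torch.Size([2, 1024])
```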